
GAPTRACK
Deep generative appearance modeling in visual tracking

postdoctoral project
July 2019 - December 2021

Funding

  • ARRS (Z2-1866)

Researchers

Luka Čehovin Zajc, PhD

Scope

Predicting object state in video streams is one of the fundamental challenges of computer vision, with numerous application domains. Knowing where an object is at a given point in time can help autonomous vehicles avoid obstacles, alert caregivers when an elderly person falls at home, analyze performance in professional sports, reveal animal behavior, or help robots actively learn new concepts. These are just a few scenarios where visual tracking methods can be used extensively. Yet numerous open challenges must be solved to develop a general visual tracking method capable of handling the scenarios mentioned above.

Visual object tracking without prior information about the object is an ill-posed problem; it cannot be solved by an on-line learning method alone for an arbitrary object. Humans, on the other hand, can solve complex tracking scenarios by relying on a massive amount of knowledge about the world accumulated through life-long learning. This knowledge includes object categories, their possible deformations, and appearance variations, which are crucial for retaining a stable representation of the tracked object. In machine learning terms, this knowledge is contained in a generative model of the object’s appearance. The challenge we address in this project is the robust design of such a generative model, its training, and its application in a visual tracking scenario. We believe that a generative appearance model of the entire object is a crucial step towards grounding visual object tracking in the high-level concepts behind raw pixel values.
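As an illustration only, and not the project's actual architecture, the sketch below shows one common form a deep generative appearance model can take: a small convolutional variational autoencoder (assuming PyTorch) that encodes object patches into a latent appearance code and reconstructs them. All class, function, and parameter names (AppearanceVAE, vae_loss, latent_dim) are hypothetical.

# Illustrative sketch only: a minimal convolutional VAE for object appearance
# patches, assuming PyTorch. Names are hypothetical, not taken from the project.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceVAE(nn.Module):
    """Encodes 64x64 RGB object patches into a latent appearance code."""

    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample a latent appearance code.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(self.fc_dec(z).view(-1, 128, 8, 8))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

if __name__ == "__main__":
    model = AppearanceVAE()
    patches = torch.rand(4, 3, 64, 64)  # stand-in for tracked object patches
    recon, mu, logvar = model(patches)
    print(vae_loss(recon, patches, mu, logvar).item())

Sampling from the latent prior and decoding yields plausible appearance hypotheses of the tracked object, which is the basic property such a model would contribute to a tracker.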

Work packages

The work is divided into four work packages:

  • WP1: development of generative deep neural network models suitable for appearance modeling of many object classes,
  • WP2: application of developed models to visual tracking,
  • WP3: training and testing data acquisition and generation,
  • WP4: dissemination.

Publications

  • A modular toolkit for visual tracking performance evaluation
    Luka Čehovin Zajc
    SoftwareX, Elsevier, 2020
  • Co-segmentation for visual object tracking
    Luka Čehovin Zajc
    ERK, 2020
  • Performance Evaluation Methodology for Long-Term Single Object Tracking
    Alan Lukežič, Luka Čehovin Zajc, Tomáš Vojíř, Jiří Matas and Matej Kristan
    IEEE Transactions on Cybernetics, 2020
  • The Eighth Visual Object Tracking VOT2020 Challenge Results
    Matej Kristan, Aleš Leonardis, Jiri Matas, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kamarainen, Luka Čehovin Zajc, Martin Danelljan, Alan Lukezic, et al.
    ECCV 2020 workshops, 2020
  • The Seventh Visual Object Tracking VOT2019 Challenge Results
    Matej Kristan, Jiri Matas, Aleš Leonardis, Michael Felsberg, Roman Pflugfelder, Joni-Kristian Kamarainen, Luka Čehovin Zajc, Ondrej Drbohlav, Alan Lukezic, et al.
    ICCV 2019 workshops, 2019

Resources

PixelPipes

Infinite data streams for deep learning.

MSKS

Schedule and run repeatable tasks in Conda environments.

vot-toolkit

Toolkit to support visual tracking performance evaluation.
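For context, below is a minimal sketch of how a tracker is typically wired into the toolkit's evaluation loop, assuming the TraX-based Python integration stub (vot.VOT, handle.region(), handle.frame(), handle.report()) distributed with the VOT challenge kits; the dummy tracker simply reports the initial region, and the exact module and method names should be treated as assumptions rather than documented API.

# Illustrative sketch only: a dummy tracker integrated with the vot-toolkit
# evaluation loop via the TraX-based Python stub. Treat exact names as
# assumptions, not the toolkit's documented API.
import sys
import vot

handle = vot.VOT("rectangle")   # communication handle, axis-aligned regions
selection = handle.region()     # initialization region provided by the toolkit

imagefile = handle.frame()      # initialization frame
if not imagefile:
    sys.exit(0)

while True:
    imagefile = handle.frame()  # path to the next frame, or None at the end
    if not imagefile:
        break
    # A real tracker would localize the object here; this dummy tracker
    # simply reports the initial region for every frame.
    handle.report(selection)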
